119 research outputs found

    Decoding by Sampling: A Randomized Lattice Algorithm for Bounded Distance Decoding

    Full text link
    Despite its reduced complexity, lattice reduction-aided decoding exhibits a widening gap to maximum-likelihood (ML) performance as the dimension increases. To improve its performance, this paper presents randomized lattice decoding based on Klein's sampling technique, which is a randomized version of Babai's nearest plane algorithm (i.e., successive interference cancellation (SIC)). To find the closest lattice point, Klein's algorithm is used to sample some lattice points, and the closest among those samples is chosen. Lattice reduction increases the probability of finding the closest lattice point and only needs to be run once during pre-processing. Further, the sampling can operate very efficiently in parallel. The technical contribution of this paper is two-fold: we analyze and optimize the decoding radius of sampling decoding, resulting in better error performance than Klein's original algorithm, and we propose a very efficient implementation of random rounding. Of particular interest is that a fixed gain in the decoding radius compared to Babai's decoding can be achieved at polynomial complexity. The proposed decoder is useful for moderate dimensions where sphere decoding becomes computationally intensive, while lattice reduction-aided decoding starts to suffer considerable loss. Simulation results demonstrate that near-ML performance is achieved by a moderate number of samples, even if the dimension is as high as 32
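
    A minimal Python sketch of the decoding pipeline described above: Babai's nearest-plane (SIC) rounding, a randomised rounding variant in the spirit of Klein's sampler, and a best-of-K sampling decoder. The truncated Gaussian-weighted rounding and the parameter names (randomize, s, K) are illustrative assumptions, not the paper's optimised implementation.

    import math, random

    def gso(B):
        # classical Gram-Schmidt orthogonalisation of the rows of B
        Bstar = []
        for i in range(len(B)):
            v = list(B[i])
            for bs in Bstar:
                m = sum(x * y for x, y in zip(B[i], bs)) / sum(x * x for x in bs)
                v = [x - m * y for x, y in zip(v, bs)]
            Bstar.append(v)
        return Bstar

    def decode(B, t, randomize=False, s=1.0):
        # Babai's nearest-plane decoder (SIC); with randomize=True the deterministic
        # rounding is replaced by Gaussian-weighted randomised rounding (Klein-style)
        Bstar = gso(B)
        t = list(t)
        coeffs = [0] * len(B)
        for i in reversed(range(len(B))):
            c = sum(x * y for x, y in zip(t, Bstar[i])) / sum(x * x for x in Bstar[i])
            if randomize:
                cands = list(range(math.floor(c) - 2, math.floor(c) + 4))
                weights = [math.exp(-((k - c) ** 2) / (2 * s * s)) for k in cands]
                z = random.choices(cands, weights=weights)[0]
            else:
                z = round(c)
            coeffs[i] = z
            t = [x - z * y for x, y in zip(t, B[i])]
        # return the lattice point sum_i coeffs[i] * B[i]
        return [sum(coeffs[i] * B[i][j] for i in range(len(B))) for j in range(len(B[0]))]

    def sampling_decode(B, t, K=100, s=1.0):
        # draw K randomised candidates and keep the lattice point closest to t
        def dist2(p): return sum((x - y) ** 2 for x, y in zip(p, t))
        best = decode(B, t)                      # deterministic Babai point as baseline
        for _ in range(K):
            cand = decode(B, t, randomize=True, s=s)
            if dist2(cand) < dist2(best):
                best = cand
        return best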

    Worst-Case Hermite-Korkine-Zolotarev Reduced Lattice Bases

    Get PDF
    The Hermite-Korkine-Zolotarev reduction plays a central role in strong lattice reduction algorithms. By building upon a technique introduced by Ajtai, we show the existence of Hermite-Korkine-Zolotarev reduced bases that are arguably least reduced. We prove that for such bases, Kannan's algorithm solving the shortest lattice vector problem requires d^{d/(2e)·(1+o(1))} bit operations in dimension d. This matches the best complexity upper bound known for this algorithm. These bases also provide lower bounds on Schnorr's constants α_d and β_d that are essentially equal to the best upper bounds. Finally, we also show the existence of particularly bad bases for Schnorr's hierarchy of reductions

    Advanced encryption from the Learning With Errors problem

    Get PDF
    The Learning With Errors (LWE) problem is computationally hard for random instances. It was introduced by Oded Regev in 2005 and has since proved very useful for constructing cryptographic primitives that guarantee the confidentiality of information. In this chapter, we present the LWE problem and illustrate its versatility by describing advanced encryption schemes that can be proved at least as secure as LWE is hard. We recall the fundamental concept of encryption, then focus on the notions of identity-based encryption and attribute-based encryption
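
    To make the chapter's starting point concrete, here is a toy Regev-style encryption of a single bit from LWE in Python. The parameters n, q, m and the noise range are arbitrary illustrative values with no security; the identity-based and attribute-based schemes discussed in the chapter are far more involved.

    import random

    n, q, m = 16, 3329, 64     # secret dimension, modulus, number of LWE samples (toy values)

    def keygen():
        s = [random.randrange(q) for _ in range(n)]                       # secret vector
        A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]   # public matrix
        e = [random.randrange(-2, 3) for _ in range(m)]                   # small errors
        b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
        return (A, b), s

    def encrypt(pk, bit):
        A, b = pk
        S = [i for i in range(m) if random.random() < 0.5]   # random subset of samples
        u = [sum(A[i][j] for i in S) % q for j in range(n)]
        c = (sum(b[i] for i in S) + bit * (q // 2)) % q
        return u, c

    def decrypt(sk, ct):
        u, c = ct
        d = (c - sum(u[j] * sk[j] for j in range(n))) % q    # = accumulated noise + bit*(q/2)
        return 1 if q // 4 < d < 3 * q // 4 else 0

    For these parameters the accumulated noise stays below q/4, so decrypt(sk, encrypt(pk, bit)) always recovers bit.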

    Numerical analysis and lattice reduction

    Get PDF
    The algorithmics of Euclidean lattices is a tool frequently used in computer science and mathematics. It relies essentially on LLL reduction, which it is therefore important to make as efficient as possible. An approach initiated by Schnorr consists in performing approximate computations to estimate the underlying Gram-Schmidt orthogonalisations; without approximations, these computations dominate the cost of the reduction. Recently, classical tools from numerical analysis have been revisited and improved to exploit Schnorr's idea more systematically and lower the cost. We describe these developments, in particular how floating-point arithmetic can be introduced at several levels of the reduction
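
    As a reference point for the approximate computations mentioned above, a plain floating-point Gram-Schmidt orthogonalisation (here with NumPy) looks as follows. LLL-type algorithms test and update exactly these mu coefficients, which is why their floating-point accuracy matters; this is an illustrative textbook routine, not the certified algorithms described in the article.

    import numpy as np

    def gram_schmidt(B):
        # floating-point Gram-Schmidt of the rows of B: returns the orthogonal
        # vectors B* and the coefficients mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>
        B = np.asarray(B, dtype=float)
        n = B.shape[0]
        Bstar = B.copy()
        mu = np.eye(n)
        for i in range(n):
            for j in range(i):
                mu[i, j] = B[i] @ Bstar[j] / (Bstar[j] @ Bstar[j])
                Bstar[i] -= mu[i, j] * Bstar[j]
        return Bstar, mu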

    Sanitization of FHE ciphertexts

    Get PDF
    By definition, fully homomorphic encryption (FHE) schemes support homomorphic decryption, and all known FHE constructions are bootstrapped from a Somewhat Homomorphic Encryption (SHE) scheme via this technique. Additionally, when a public key is provided, ciphertexts are also re-randomizable, e.g., by adding to them fresh encryptions of 0. From these two operations we devise an algorithm to sanitize a ciphertext, by making its distribution canonical. In particular, the distribution of the ciphertext does not depend on the circuit that led to it via homomorphic evaluation, thus providing circuit privacy in the honest-but-curious model. Unlike the previous approach based on noise flooding, our approach does not significantly degrade the security/efficiency trade-off of the underlying FHE. The technique can be applied to all lattice-based FHE schemes proposed so far, without substantially affecting their concrete parameters
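
    The sanitisation procedure itself is a short loop alternating the two operations mentioned above. The sketch below is written against a hypothetical FHE interface: add, encrypt_zero and bootstrap are placeholders, not the API of any particular library, and the number of rounds needed in practice depends on the scheme's parameters.

    def sanitize(ct, pk, bk, add, encrypt_zero, bootstrap, rounds=3):
        # Repeat: re-randomise (add a fresh encryption of 0 under pk), then
        # homomorphically decrypt (bootstrap with the bootstrapping key bk).
        # After enough rounds the ciphertext distribution is close to canonical,
        # independent of the circuit that produced ct.
        for _ in range(rounds):
            ct = add(ct, encrypt_zero(pk))   # re-randomisation step
            ct = bootstrap(ct, bk)           # homomorphic decryption washes the noise
        return ct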

    Perturbation Analysis of the QR Factor R in the Context of LLL Lattice Basis Reduction

    Get PDF
    ... computable notion of reduction of a basis of a Euclidean lattice that is now commonly referred to as LLL-reduction. The precise definition involves the R-factor of the QR factorisation of the basis matrix. A natural means of speeding up the LLL reduction algorithm is to use a (floating-point) approximation of the R-factor. In the present article, we investigate the accuracy of the R-factor of the QR factorisation of an LLL-reduced basis. The results we obtain should be very useful for devising LLL-type algorithms relying on floating-point approximations
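
    A quick NumPy experiment illustrates the question studied here: perturb a basis matrix slightly and measure how much its R-factor moves. The basis size, perturbation magnitude and sign normalisation below are arbitrary illustrative choices, not the article's analysis.

    import numpy as np

    def qr_pos(M):
        # QR factorisation with the R-factor normalised to a positive diagonal,
        # so that R is uniquely defined and comparable across nearby matrices
        Q, R = np.linalg.qr(M)
        s = np.sign(np.diag(R))
        s[s == 0] = 1
        return Q * s, s[:, None] * R

    rng = np.random.default_rng(0)
    B = rng.integers(-50, 50, size=(8, 8)).astype(float)   # basis vectors as columns
    E = 1e-8 * rng.standard_normal(B.shape)                 # tiny perturbation

    _, R1 = qr_pos(B)
    _, R2 = qr_pos(B + E)
    print(np.linalg.norm(R2 - R1) / np.linalg.norm(R1))     # relative change of the R-factor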

    A Binary Recursive Gcd Algorithm

    Get PDF
    The binary algorithm is a variant of the Euclidean algorithm that performs well in practice. We present a quasi-linear time recursive algorithm that computes the greatest common divisor of two integers by simulating a slightly modified version of the binary algorithm. The structure of the recursive algorithm is very close to that of the well-known Knuth-Schönhage fast gcd algorithm, but the description and the proof of correctness are significantly simpler in our case. This leads to a simpler implementation and to better running times
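
    For reference, the classical iterative binary gcd that the recursive algorithm simulates looks like this in Python; the quasi-linear recursive version of the paper works on truncated high-order parts and is considerably more involved.

    def binary_gcd(a, b):
        # classical binary gcd (Stein's algorithm): only shifts, subtractions
        # and parity tests, no divisions
        a, b = abs(a), abs(b)
        if a == 0:
            return b
        if b == 0:
            return a
        shift = 0
        while (a | b) & 1 == 0:          # factor out common powers of two
            a >>= 1
            b >>= 1
            shift += 1
        while a & 1 == 0:                # make a odd
            a >>= 1
        while b != 0:
            while b & 1 == 0:
                b >>= 1
            if a > b:
                a, b = b, a
            b -= a
        return a << shift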

    Making NTRUEncrypt and NTRUSign as Secure as Standard Worst-Case Problems over Ideal Lattices

    Get PDF
    NTRUEncrypt, proposed in 1996 by Hoffstein, Pipher and Silverman, is the fastest known lattice-based encryption scheme. Its moderate key sizes, excellent asymptotic performance and conjectured resistance to quantum computers make it a desirable alternative to factorisation- and discrete-log-based encryption schemes. However, since its introduction, doubts have regularly arisen about its security and that of its digital signature counterpart. In the present work, we show how to modify NTRUEncrypt and NTRUSign to make them provably secure in the standard (resp. random oracle) model, under the assumed quantum (resp. classical) hardness of standard worst-case lattice problems, restricted to a family of lattices related to some cyclotomic fields. Our main contribution is to show that if the secret key polynomials of the encryption scheme are selected from discrete Gaussians, then the public key, which is their ratio, is statistically indistinguishable from uniform over its range. We also show how to rigorously extend the encryption secret key into a signature secret key. Security then follows from the already proven hardness of the R-SIS and R-LWE problems

    Floating-Point LLL Revisited

    Get PDF
    The Lenstra-Lenstra-Lovász lattice basis reduction algorithm (LLL) has proved invaluable in public-key cryptanalysis and in many other fields. Given an integer d-dimensional lattice basis whose vectors have norms smaller than B, LLL outputs a so-called LLL-reduced basis in time O(d^6 log^3 B), using arithmetic operations on integers of bit-length O(d log B). This worst-case complexity is problematic for lattices arising in cryptanalysis, where d and/or log B are often large. As a result, the original LLL is almost never used in practice. Instead, one applies floating-point variants of LLL, in which the long-integer arithmetic required by Gram-Schmidt orthogonalisation (central in LLL) is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point LLL is not even guaranteed to terminate, and the output basis may not be LLL-reduced at all. In this article, we introduce the L^2 algorithm, a new and natural floating-point variant of LLL which provably outputs LLL-reduced bases in polynomial time O(d^5 (d + log B) log B). This is the first LLL algorithm whose running time provably grows only quadratically with respect to log B without fast integer arithmetic, like the famous Gaussian and Euclidean algorithms. The growth is cubic for all other known LLL algorithms
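
    For orientation, a textbook LLL sketch in Python with floating-point Gram-Schmidt is given below. It naively recomputes the orthogonalisation after every change and comes with none of the stability guarantees of the L^2 algorithm; delta is the usual Lovász parameter.

    def lll_reduce(basis, delta=0.99):
        # textbook LLL with floating-point Gram-Schmidt (simple but slow; no
        # guarantee of numerical stability, unlike L^2)
        b = [list(map(float, v)) for v in basis]
        n = len(b)

        def dot(u, v):
            return sum(x * y for x, y in zip(u, v))

        def gso(b):
            # b*_i and the coefficients mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>
            bstar, mu = [], [[0.0] * n for _ in range(n)]
            for i in range(n):
                v = list(b[i])
                for j in range(i):
                    mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                    v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
                bstar.append(v)
            return bstar, mu

        bstar, mu = gso(b)
        k = 1
        while k < n:
            for j in range(k - 1, -1, -1):        # size-reduce b_k
                q = round(mu[k][j])
                if q != 0:
                    b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                    bstar, mu = gso(b)
            if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
                k += 1                             # Lovász condition holds
            else:
                b[k], b[k - 1] = b[k - 1], b[k]    # swap and step back
                bstar, mu = gso(b)
                k = max(k - 1, 1)
        return b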

    Low-dimensional lattice basis reduction revisited

    Get PDF
    Lattice reduction is a geometric generalization of the problem of computing greatest common divisors. Most of the interesting algorithmic problems related to lattice reduction are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm, because it is a straightforward generalization of an old two-dimensional algorithm of Lagrange, usually known as Gauss' algorithm, which is very similar to Euclid's gcd algorithm. Our results are two-fold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may be arbitrarily bad, as it may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic, just like Euclid's gcd algorithm. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four. We propose two different analyses: a global approach based on the geometry of the current basis when the length decrease stalls, and a local approach showing directly that a significant length decrease must occur every O(1) consecutive steps. Our analyses simplify Semaev's analysis in dimensions two and three, and unify the cases of dimensions two to four. Although the global approach is much simpler, we also present the local approach because it gives further information on the behavior of the algorithm
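
    The two-dimensional ancestor mentioned above (Lagrange's algorithm, usually credited to Gauss) is short enough to state as code; the greedy algorithm analysed in the article generalises this loop to dimensions three and four. A minimal Python sketch for integer vectors:

    def lagrange_reduce(u, v):
        # Lagrange/Gauss reduction of a 2-dimensional lattice basis (u, v):
        # repeatedly subtract the nearest-integer multiple, as in Euclid's gcd
        def dot(a, b):
            return a[0] * b[0] + a[1] * b[1]
        while True:
            if dot(u, u) < dot(v, v):
                u, v = v, u                         # keep |u| >= |v|
            q = round(dot(u, v) / dot(v, v))        # nearest-integer "quotient"
            u = (u[0] - q * v[0], u[1] - q * v[1])  # "remainder" vector
            if dot(u, u) >= dot(v, v):
                return v, u                         # reduced basis: v is a shortest vector

    The returned pair generates the same lattice as (u, v), with the first vector reaching the first minimum and the second reaching the second minimum.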